11 research outputs found

    A Multiple Criteria Decision Analysis based Approach to Remove Uncertainty in SMP Models

    Advanced AI technologies are serving humankind in a number of ways, from healthcare to manufacturing. Advanced automated machines are quite expensive, but their output is expected to be of the highest possible quality. Depending on the agility of requirements, these automation technologies can change dramatically. The likelihood of making changes to automation software is extremely high, so it must be updated regularly. If maintainability is not taken into account, it will affect the entire system and increase maintenance costs. Many companies use different programming paradigms when developing advanced automated machines based on client requirements, so it is essential to estimate the maintainability of heterogeneous software. The lack of widespread consensus on software maintainability prediction (SMP) methodologies leaves individuals and businesses perplexed when determining the appropriate model for estimating the maintainability of software, which serves as the inspiration for this research. A structured methodology was designed and the datasets were preprocessed; the maintainability index (MI) range was found for all the datasets except UIMS and QUES, for which the metric CHANGE was used instead. To remove the uncertainty among the aforementioned techniques, a popular multiple-criteria decision-making model, namely the technique for order preference by similarity to ideal solution (TOPSIS), is used in this work. TOPSIS revealed that GARF outperforms the other considered techniques in predicting the maintainability of heterogeneous automated software. Comment: Submitted for peer review.
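    The TOPSIS ranking used here can be sketched in a few lines. The sketch below is a generic closeness-coefficient computation, not the paper's implementation; the evaluation matrix, weights, and criterion directions are hypothetical (three candidate models scored on accuracy, to be maximized, and error, to be minimized).

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: rows = alternatives, columns = criteria.
    benefit[j] is True if criterion j should be maximized."""
    m, n = len(matrix), len(matrix[0])
    # 1. Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # 2. Ideal best and ideal worst value per criterion.
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # 3. Euclidean distances to both ideals -> closeness coefficient.
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical matrix: 3 models x 2 criteria (accuracy up, error down).
scores = topsis([[0.90, 0.12], [0.85, 0.10], [0.80, 0.20]],
                weights=[0.5, 0.5], benefit=[True, False])
```

    The alternative with the highest closeness coefficient is ranked best: a score near 1 means close to the ideal solution, while 0 coincides with the anti-ideal one.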

    Blockchain for the metaverse: A Review

    Since Facebook officially changed its name to Meta in October 2021, the metaverse has become a new norm of social networks and three-dimensional (3D) virtual worlds. The metaverse aims to bring 3D immersive and personalized experiences to users by leveraging many pertinent technologies. Despite great attention and benefits, a natural question in the metaverse is how to secure its users’ digital content and data. In this regard, blockchain is a promising solution owing to its distinct features of decentralization, immutability, and transparency. To better understand the role of blockchain in the metaverse, we aim to provide an extensive survey on the applications of blockchain for the metaverse. We first present preliminaries of blockchain and the metaverse and highlight the motivations behind the use of blockchain for the metaverse. Next, we extensively discuss blockchain-based methods for the metaverse from technical perspectives, such as data acquisition, data storage, data sharing, data interoperability, and data privacy preservation. For each perspective, we first discuss the technical challenges of the metaverse and then highlight how blockchain can help. Moreover, we investigate the impact of blockchain on key enabling technologies in the metaverse, including the Internet of Things, digital twins, multi-sensory and immersive applications, artificial intelligence, and big data. We also present some major projects to showcase the role of blockchain in metaverse applications and services. Finally, we present some promising directions to drive further research innovations and developments toward the use of blockchain in the metaverse.

    A Survey on Federated Learning for the Healthcare Metaverse: Concepts, Applications, Challenges, and Future Directions

    Recent technological advancements have considerably improved healthcare systems to provide various intelligent healthcare services and improve the quality of life. Federated learning (FL), a new branch of artificial intelligence (AI), opens opportunities to deal with privacy issues in healthcare systems and exploit data and computing resources available at distributed devices. Additionally, the Metaverse, through integrating emerging technologies such as AI, cloud-edge computing, the Internet of Things (IoT), blockchain, and semantic communications, has transformed many vertical domains in general and the healthcare sector in particular. FL thus shows many benefits and provides new opportunities for conventional and Metaverse healthcare, motivating us to provide a survey on the usage of FL for Metaverse healthcare systems. First, we present preliminaries of IoT-based healthcare systems, FL in conventional healthcare, and Metaverse healthcare. The benefits of FL in Metaverse healthcare are then discussed, from improved privacy and scalability, better interoperability, better data management, and extra security to automation and low-latency healthcare services. Subsequently, we discuss several applications pertaining to FL-enabled Metaverse healthcare, including medical diagnosis, patient monitoring, medical education, infectious disease control, and drug discovery. Finally, we highlight significant challenges and potential solutions toward the realization of FL in Metaverse healthcare. Comment: Submitted for peer review.

    A multiple criteria decision analysis based approach to remove uncertainty in SMP models

    Software has to be updated frequently to match customer needs. If software maintainability is not given priority, it affects the software development life cycle and maintenance expenses, which deplete organizational assets. Before releasing software, maintainability must be estimated, as the impact of bugs and errors can affect the cost and reputation of the organization after deployment. Regardless of the programming paradigm, it is important to assess software maintainability. Many software maintainability prediction models are criticized for their poor compatibility with new programming paradigms because of their limited applicability over heterogeneous datasets. Due to this challenge, small and medium-sized organizations may even skip the maintainability assessment, resulting in huge losses to such organizations. Motivated by this fact, we used the Genetic Algorithm optimized Random Forest technique (GARF) for software maintainability prediction over heterogeneous datasets. To find the optimal model for software maintainability prediction, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), a popular multiple-criteria decision-making model, is adopted. From the results, it is concluded that GARF is optimal for predicting the maintainability of software developed in various paradigms.

    Handling the Class Imbalance Problem With an Improved Sine Cosine Algorithm for Optimal Instance Selection

    Class imbalance is a significant research problem: models trained on imbalanced data are biased, exhibiting excellent performance on the majority classes in the dataset while showing inferior performance on minority classes. When dealing with real-world issues, this biased behavior affects classification accuracy. The Improved Binary Sine Cosine Algorithm (IBSCA) has been used in this work to identify a subset of the majority class in the best possible way. The proposed IBSCA makes some enhancements over the conventional Binary Sine Cosine Algorithm (BSCA) to address the issue of premature convergence to local optimal solutions. To improve classification accuracy on unbalanced datasets, the proposed IBSCA seeks to identify the optimal collection of instances from the majority class. It makes use of the positions of the alpha agent, beta agent, and a random agent, devoting considerable time to exploration to find the best possible set of instances. By defining the fitness function with the geometric mean (G-mean) and F-score, the proposed IBSCA solves a multi-objective optimization problem. Experimentation is conducted on 18 datasets with different imbalance ratios taken from the KEEL repository. Comparisons are made between the proposed IBSCA and the conventional Binary Sine Cosine Algorithm, Binary Particle Swarm Optimization (BPSO), and Binary Grey Wolf Optimization (BGWO). Additionally, the performance of the proposed IBSCA is evaluated against the top outcomes from other research papers. Metrics like sensitivity, F-score, G-mean, and area under the curve (AUC) show that the proposed IBSCA outperforms the state-of-the-art algorithms. Statistical findings using the Wilcoxon signed-rank test and Friedman test also demonstrate that the proposed IBSCA is more efficient than the other conventional algorithms.
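    For context, the conventional (continuous) Sine Cosine Algorithm step that BSCA and IBSCA build on moves each agent toward, or around, the best solution found so far, with the amplitude r1 shrinking over iterations to trade exploration for exploitation. The sketch below is the textbook SCA update, not the paper's improved binary variant (which adds alpha/beta/random-agent guidance and a binary mapping); the agent positions are made up for illustration.

```python
import math
import random

def sca_step(positions, best, t, t_max, a=2.0):
    """One Sine Cosine Algorithm iteration.
    positions: list of agents, each a list of coordinates.
    best: best-known solution so far; t/t_max: current/total iterations."""
    r1 = a - t * (a / t_max)  # linearly decreasing amplitude
    new_positions = []
    for x in positions:
        updated = []
        for j, xj in enumerate(x):
            r2 = random.uniform(0.0, 2.0 * math.pi)  # oscillation phase
            r3 = random.uniform(0.0, 2.0)            # weight on the best solution
            r4 = random.random()                     # sine-vs-cosine switch
            step = r1 * (math.sin(r2) if r4 < 0.5 else math.cos(r2))
            updated.append(xj + step * abs(r3 * best[j] - xj))
        new_positions.append(updated)
    return new_positions

random.seed(0)  # reproducible demo
agents = [[0.0, 1.0], [2.0, -1.0]]
moved = sca_step(agents, best=[1.0, 0.0], t=1, t_max=100)
```

    A binary variant like BSCA would additionally squash each coordinate through a transfer function and threshold it to 0/1, so an agent encodes which majority-class instances are kept.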

    Enhancing Security of Host-Based Intrusion Detection Systems for the Internet of Things

    The Internet of Things (IoT) infrastructure enables smart devices to learn, think, speak, and perform. The facilities of IoT devices can be enhanced to support intelligent applications through technologies like fog computing, smart networks, federated learning, and explainable artificial intelligence infrastructures. In all these cases, networking of IoT devices becomes inevitable, and wherever there exists a network, a threat to the network infrastructure is also possible. The proposed work classifies various attacks on hosts with the support of proven machine learning (ML) algorithms. This work performs a comparative analysis of the classification parameters of these machine learning algorithms with the use of fuzzy-based recommendation systems. It also lists various incidents of intrusions on IoT hosts in the appropriate layers of the interface and proposes an efficient algorithm and framework to overcome the occurrence of intrusions on the host side. In particular, we propose an effective security framework to deal with the intrusions that can deteriorate host-based systems. The ranking of the algorithms is evaluated using fuzzy-based recommendation systems such as TOPSIS, VIKOR, MOORA, and WASPAS. The ensemble of machine learning algorithms such as Decision Tree, Light Gradient Boosting, Extreme Gradient Boosting, and Random Forest provides better accuracy (around 99%) with higher precision, recall, and F1-scores, thus proving their efficacy for intrusion detection in IoT networks.
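    The hard-voting idea behind such an ensemble can be sketched as follows. This is a generic majority vote over per-model class labels, not the paper's actual pipeline (which may combine the four learners differently); the classifier outputs below are invented for illustration (0 = benign, 1 = attack).

```python
from collections import Counter

def ensemble_predict(model_outputs):
    """model_outputs: list of per-model label lists, one label per sample.
    Returns the majority-vote label for each sample (ties go to the
    label encountered first)."""
    return [Counter(sample).most_common(1)[0][0]
            for sample in zip(*model_outputs)]

# Hypothetical labels from four classifiers over five traffic samples.
dt   = [0, 1, 1, 0, 1]  # Decision Tree
lgbm = [0, 1, 0, 0, 1]  # Light Gradient Boosting
xgb  = [1, 1, 1, 0, 1]  # Extreme Gradient Boosting
rf   = [0, 1, 1, 0, 0]  # Random Forest
labels = ensemble_predict([dt, lgbm, xgb, rf])
```

    Soft voting, which averages class probabilities instead of counting labels, is a common alternative when the base learners expose calibrated scores.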

    From Assistive Technologies to Metaverse: Technologies in Inclusive Higher Education for Students with Specific Learning Difficulties

    The development of new technologies and their expanding use in a wide range of educational environments are driving the transformation of higher education. Assistive technologies are a subset of cutting-edge technology that can help students learn more effectively and make education accessible to everyone. Assistive technology can enhance, maintain, or improve the capacities of students with learning difficulties, and such students benefit greatly from its use. If these technologies are used effectively, students with learning difficulties can compete with their peers and complete their academic tasks. We aim to conduct this review to better understand the role of assistive technologies in providing inclusive higher education for students with learning difficulties. The review begins with an introduction to learning difficulties and their causes; inclusive education and the need for assistive technologies; the reasoning for conducting this review; and a summary of related reviews on assistive technologies for students with learning difficulties in inclusive higher education. Then, we discuss the preliminaries of learning difficulty types and assistive technology. Later, we discuss the effects of assistive technology on inclusive higher education for students with learning difficulties. Additionally, we discuss related projects and support tools available in inclusive higher education for students with learning difficulties. We also explore the challenges and possible solutions related to using assistive technology in higher education to provide inclusive education for students with learning difficulties. We conclude the review with a discussion of promising future directions. Comment: Submitted for peer review.

    Federated Learning for intrusion detection system: Concepts, challenges and future directions

    The rapid development of the Internet and smart devices has triggered a surge in network traffic, making its infrastructure more complex and heterogeneous. The predominant usage of mobile phones, wearable devices, and autonomous vehicles exemplifies distributed networks that generate huge amounts of data every day. The computational power of these devices has also seen steady progression, which has created the need to transmit information, store data locally, and drive network computations towards edge devices. Intrusion detection systems play a significant role in ensuring the security and privacy of such devices. Machine learning and deep learning with intrusion detection systems have gained great momentum due to their achievement of high classification accuracy. However, the privacy and security aspects are potentially jeopardised by the need to store and communicate data to a centralized server. In contrast, federated learning (FL) fits in appropriately as a privacy-preserving decentralized learning technique that does not transfer data but trains models locally and transfers only the parameters to the centralized server. The present paper aims to present an extensive and exhaustive review of the use of FL in intrusion detection systems. In order to establish the need for FL, various types of IDS, relevant ML approaches, and their associated issues are discussed. The paper presents a detailed overview of the implementation of FL in various aspects of anomaly detection. The allied challenges of FL implementations are also identified, giving an idea of the scope of future research directions. The paper finally presents plausible solutions to the identified challenges in FL-based intrusion detection system implementation, acting as a baseline for prospective research.
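    The FL training loop such surveys examine is typically FedAvg-style: clients fit a model on private data, and only parameters travel to the server, which averages them weighted by client dataset size. The sketch below is a minimal illustration in which a least-squares local objective stands in for real intrusion-detection training; the client data and learning rate are hypothetical.

```python
def local_update(weights, data, lr=0.1):
    """Hypothetical local step: one pass of stochastic gradient descent
    on a least-squares objective y ~ w . x (stands in for local IDS training)."""
    w = list(weights)
    for x, y in data:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def fed_avg(global_w, client_datasets, rounds=5):
    """FedAvg: each round, clients train locally on private data; only the
    parameters come back and are averaged, weighted by dataset size."""
    for _ in range(rounds):
        n_total = sum(len(d) for d in client_datasets)
        local_ws = [local_update(global_w, d) for d in client_datasets]
        global_w = [sum(len(d) / n_total * w[j]
                        for w, d in zip(local_ws, client_datasets))
                    for j in range(len(global_w))]
    return global_w

# Two hypothetical clients whose private samples follow y = 2x.
clients = [[([1.0], 2.0), ([2.0], 4.0)], [([3.0], 6.0)]]
w = fed_avg([0.0], clients, rounds=5)
```

    Only the parameter vector crosses the network in each round; the raw (x, y) records never leave their client, which is the privacy property motivating FL-based intrusion detection.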

    Incentive techniques for the Internet of Things: A survey

    The Internet of Things (IoT) has remarkably evolved over the last few years to realize a wide range of newly emerging services and applications, empowered by the unprecedented proliferation of smart devices. The quality of IoT networks heavily relies on the involvement of devices for undertaking functions from data sensing and computation to communication and IoT intelligence. Stimulating IoT devices to actively participate and contribute to the network is a practical challenge, where incentive techniques such as blockchain, game theory, and Artificial Intelligence (AI) are highly desirable to build a sustainable IoT ecosystem. In this article, we present a systematic literature review of incentive techniques for IoT, aiming to provide general readers with an overview of incentive-enabled IoT, from background and motivations to enabling techniques. Particularly, we first present the fundamentals of IoT data network infrastructure, and several key incentive techniques for IoT are described in detail, including blockchain, game theory, and AI. We next provide an extensive review of the use of these incentive techniques in a number of key IoT services, such as IoT data sharing, IoT data offloading and caching, IoT mobile crowdsensing, and IoT security and privacy. Subsequently, we explore the potential of incentives in important IoT applications, ranging from smart healthcare and smart transportation to smart cities and smart industry. The research challenges of incentive techniques in IoT networks are highlighted, and potential directions are pointed out for future research in this important area.

    A Comprehensive Analysis of Blockchain Applications for Securing Computer Vision Systems

    Blockchain (BC) and Computer Vision (CV) are two emerging fields with the potential to transform various sectors. BC can offer decentralized and secure data storage, while CV allows machines to learn and understand visual data. The integration of the two technologies holds massive promise for developing innovative applications that can provide solutions to challenges in various sectors such as supply chain management, healthcare, smart cities, and defense. This review provides a comprehensive analysis of the integration of BC and CV by examining their combination and potential applications. It also analyzes the fundamental concepts of both technologies, highlighting their strengths and limitations, and surveys current research efforts that make use of the benefits offered by this combination. BC can serve as an added layer of security in CV systems and ensure data integrity, enabling decentralized image and video analytics. Finally, the challenges and open issues associated with this integration are identified, and potential future directions are proposed.